Tackling Insider Threats Using Risk-and-Trust Aware Access Control Approaches
Insider attacks are among the most dangerous threats organizations face today. An insider attack occurs when a person authorized to perform certain actions in an organization decides to abuse that trust and harm the organization by breaching the confidentiality, integrity, or availability of its assets. These attacks may damage the organization's reputation and productivity and incur heavy losses in revenue and clients. Preventing insider attacks is a daunting task. Employees need legitimate access to perform their jobs effectively; however, at any point in time they may misuse their privileges, accidentally or intentionally. Hence, it is necessary to develop a system capable of finding a middle ground where the necessary privileges are provided and insider threats are mitigated. In this dissertation, we address this critical issue.
We propose three adaptive risk-and-trust aware access control frameworks that aim to thwart insider attacks by incorporating user behavior into the access control decision process. Our first framework is tailored toward general insider threat prevention in role-based access control systems. As part of this framework, we propose methodologies for specifying risk-and-trust aware access control policies and a risk management approach that minimizes the risk exposure of each access request. Our second framework is designed to mitigate the risks of obligation-based systems, which are difficult to manage and particularly vulnerable to sabotage. As part of this framework, we propose an insider-threat-resistant trust computation methodology, emphasizing the monitoring of obligation-fulfillment patterns to detect psychological precursors that are highly predictive of potential insider threats. Our third framework takes advantage of geo-social information to deter insider threats. We uncover insider threats that arise when geo-social information is used to make access control decisions and, based on this analysis, define an insider-threat-resilient access control approach that manages privileges in light of geo-social context. The models and methodologies presented in this dissertation can help a broad range of organizations mitigate insider threats.
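To make the flavor of the first framework's decision process concrete, here is a minimal Python sketch of a risk-and-trust aware access check. The scoring formula, threshold, and field names are illustrative assumptions for this sketch, not the dissertation's actual model: a request's inherent risk is discounted by the requester's behavioral trust score, and access is granted only when the residual risk stays below a policy threshold.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_trust: float         # in [0, 1], learned from past behavior
        asset_sensitivity: float  # in [0, 1], assigned by policy
        action_severity: float    # in [0, 1], e.g. read < write < delete

    def residual_risk(req: AccessRequest) -> float:
        """Risk exposure of granting the request, discounted by trust."""
        inherent = req.asset_sensitivity * req.action_severity
        return inherent * (1.0 - req.user_trust)

    def decide(req: AccessRequest, threshold: float = 0.2) -> bool:
        """Grant only when the trust-discounted risk is acceptable."""
        return residual_risk(req) <= threshold

    # A trusted user reading a sensitive asset is granted (risk 0.04);
    # the same action by a low-trust user is denied (risk 0.32).
    print(decide(AccessRequest(0.9, 0.8, 0.5)))  # True
    print(decide(AccessRequest(0.2, 0.8, 0.5)))  # False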
A Hybrid Approach to Privacy-Preserving Federated Learning
Federated learning facilitates the collaborative training of models without
the sharing of raw data. However, recent attacks demonstrate that simply
maintaining data locality during training processes does not provide sufficient
privacy guarantees. Rather, we need a federated learning system capable of
preventing inference over both the messages exchanged during training and the
final trained model while ensuring the resulting model also has acceptable
predictive accuracy. Existing federated learning approaches either use secure
multiparty computation (SMC), which is vulnerable to inference, or differential
privacy, which can lead to low accuracy given a large number of parties each
holding relatively small amounts of data. In this paper, we present an alternative
approach that utilizes both differential privacy and SMC to balance these
trade-offs. Combining differential privacy with secure multiparty computation
enables us to reduce the growth of noise injection as the number of parties
increases without sacrificing privacy while maintaining a pre-defined rate of
trust. Our system is therefore a scalable approach that protects against
inference threats and produces models with high accuracy. Additionally, our
system can be used to train a variety of machine learning models, which we
validate with experimental results on three different machine learning
algorithms. Our experiments demonstrate that our approach outperforms
state-of-the-art solutions.
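To see why the hybrid helps, consider the noise each party must inject. Under purely local differential privacy, every party adds noise at the full scale sigma needed to protect its update on its own; under SMC the aggregator only ever sees the sum, so it suffices that the honest parties' noise jointly reaches scale sigma. The toy sketch below illustrates the general DP-plus-SMC idea (it is not the paper's implementation): per-party Gaussian noise shrinks by a factor of sqrt(n - t), where t is the assumed maximum number of colluding parties, i.e., the pre-defined rate of trust.

    import numpy as np

    def hybrid_dp_update(update, sigma, n_parties, t_colluders, rng):
        """DP + SMC: because the aggregator only ever sees the securely
        computed sum, each of the n parties adds Gaussian noise of scale
        sigma / sqrt(n - t) instead of the full sigma that purely local
        DP would require. Even if t colluders subtract their own noise,
        the honest parties' noise still sums to scale sigma."""
        per_party_sigma = sigma / np.sqrt(n_parties - t_colluders)
        return update + rng.normal(0.0, per_party_sigma, size=update.shape)

    rng = np.random.default_rng(0)
    update = np.zeros(1000)                # a party's model update (toy)
    n, t = 100, 33                         # parties and tolerated colluders

    noisy = [hybrid_dp_update(update, 1.0, n, t, rng) for _ in range(n)]
    print(np.std(sum(noisy[t:])))          # ~1.0: honest noise alone reaches sigma
    print(1.0 / np.sqrt(n - t))            # ~0.12: per-party scale, ~8x smaller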
Federated Unlearning: How to Efficiently Erase a Client in FL?
With privacy legislation empowering the users with the right to be forgotten,
it has become essential to make a model amenable for forgetting some of its
training data. However, existing unlearning methods from the centralized
machine learning setting cannot be directly applied to distributed settings
like federated learning, due to differences in the learning protocol and the presence
of multiple actors. In this paper, we tackle the problem of federated
unlearning for the case of erasing a client by removing the influence of their
entire local data from the trained global model. To erase a client, we propose
to first perform local unlearning at the client to be erased, and then use the
locally unlearned model as the initialization for a small number of rounds of
federated learning between the server and the remaining clients to obtain the
unlearned global model. We empirically evaluate our unlearning method by
employing multiple performance measures on three datasets, and demonstrate that
our unlearning method achieves performance comparable to the gold-standard
unlearning method of federated retraining from scratch, while being
significantly more efficient. Unlike prior works, our method requires neither
global access to the training data nor storage of the parameter-update history
by the server or any of the clients.
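The two-step recipe can be made concrete with a runnable numpy toy on a linear model. Both steps below are illustrative stand-ins rather than the paper's exact procedures; in particular, realizing local unlearning as a few gradient-ascent steps on the erased client's own data is an assumption made for this sketch.

    import numpy as np

    def grad(w, X, y):
        """Gradient of mean squared error for a linear model."""
        return 2.0 * X.T @ (X @ w - y) / len(y)

    def local_unlearn(w, X, y, steps=5, lr=0.05):
        """Local unlearning at the erased client, realized here as a few
        gradient-ascent steps on its own data (an assumption for this
        sketch), pushing the model away from what it fit to that data."""
        w = w.copy()
        for _ in range(steps):
            w += lr * grad(w, X, y)
        return w

    def fedavg_round(w, clients, lr=0.05, local_steps=10):
        """One FedAvg round: each client runs local gradient descent from
        the current global model; the server averages the results."""
        local_models = []
        for X, y in clients:
            wl = w.copy()
            for _ in range(local_steps):
                wl -= lr * grad(wl, X, y)
            local_models.append(wl)
        return np.mean(local_models, axis=0)

    rng = np.random.default_rng(0)
    w_true = rng.normal(size=3)
    clients = []
    for _ in range(4):
        X = rng.normal(size=(50, 3))
        clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

    w = np.zeros(3)
    for _ in range(20):                      # train with all four clients
        w = fedavg_round(w, clients)

    w_u = local_unlearn(w, *clients[0])      # step 1: client 0 unlearns locally
    for _ in range(3):                       # step 2: few recovery rounds,
        w_u = fedavg_round(w_u, clients[1:]) #         without client 0
    print(w_true, w_u)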
HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning
Federated learning has emerged as a promising approach for collaborative and
privacy-preserving learning. Participants in a federated learning process
cooperatively train a model by exchanging model parameters instead of the
actual training data, which they might want to keep private. However, the
exchanged parameters and the resulting model might still disclose information
about the training data used. To address these privacy concerns, several approaches have
been proposed based on differential privacy and secure multiparty computation
(SMC), among others. They often result in large communication overhead and slow
training time. In this paper, we propose HybridAlpha, an approach for
privacy-preserving federated learning employing an SMC protocol based on
functional encryption. This protocol is simple, efficient, and resilient to
participants dropping out. We evaluate our approach regarding the training time
and data volume exchanged using a federated learning process to train a CNN on
the MNIST data set. Evaluation against existing crypto-based SMC solutions
shows that HybridAlpha can reduce the training time by 68% and data transfer
volume by 92% on average while providing the same model performance and privacy
guarantees as the existing solutions.
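The core primitive behind such an FE-based SMC protocol is inner-product functional encryption: parties encrypt their (quantized) model updates, and the aggregator obtains a functional key that decrypts only a chosen weighted sum, never an individual update. The toy construction below is a one-time-pad analogue used purely for illustration; HybridAlpha's actual scheme is a public-key multi-input FE scheme with a third-party authority for key generation.

    import numpy as np

    Q = 2**31 - 1  # modulus for the toy scheme

    class ToyIPFE:
        """One-time inner-product FE: an authority holds master secrets
        s_i; a functional key for weights y reveals ONLY sum_i y_i * x_i."""
        def __init__(self, n_parties, dim, rng):
            self.s = rng.integers(0, Q, size=(n_parties, dim))  # master secret

        def encrypt(self, i, x):
            return (x + self.s[i]) % Q          # party i's ciphertext

        def keygen(self, y):
            return (y @ self.s) % Q             # sk_y = sum_i y_i * s_i

    def decrypt(cts, y, sk_y):
        return (y @ np.array(cts) - sk_y) % Q   # = sum_i y_i * x_i, nothing else

    rng = np.random.default_rng(1)
    tpa = ToyIPFE(n_parties=3, dim=4, rng=rng)  # the third-party authority

    updates = [rng.integers(0, 100, size=4) for _ in range(3)]  # quantized updates
    cts = [tpa.encrypt(i, u) for i, u in enumerate(updates)]    # parties encrypt

    weights = np.ones(3, dtype=np.int64)        # plain summation / averaging
    sk = tpa.keygen(weights)                    # aggregator's functional key
    print(decrypt(cts, weights, sk))            # equals the sum of updates
    print(sum(updates) % Q)                     # sanity check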
DeTrust-FL: Privacy-Preserving Federated Learning in Decentralized Trust Setting
Federated learning has emerged as a privacy-preserving machine learning
approach where multiple parties can train a single model without sharing their
raw training data. Federated learning typically relies on multi-party
computation techniques to provide strong privacy guarantees by
ensuring that an untrusted or curious aggregator cannot obtain isolated replies
from parties involved in the training process, thereby preventing potential
inference attacks. Until recently, it was thought that some of these secure
aggregation techniques were sufficient to fully protect against inference
attacks coming from a curious aggregator. However, recent research has
demonstrated that a curious aggregator can successfully launch a disaggregation
attack to learn information about the model updates of a target party. This
paper presents DeTrust-FL, an efficient privacy-preserving federated learning
framework that addresses the lack of transparency enabling isolation attacks,
such as disaggregation attacks, during secure aggregation, by ensuring that
parties' model updates are included in the aggregated model in a private and
secure manner. DeTrust-FL introduces a decentralized trust consensus
mechanism and incorporates a recently proposed decentralized functional
encryption (FE) scheme in which all parties agree on a participation matrix
before collaboratively generating decryption key fragments, thereby gaining
control and trust over the secure aggregation process in a decentralized
setting. Our experimental evaluation demonstrates that DeTrust-FL outperforms
state-of-the-art FE-based secure multi-party aggregation solutions in terms of
training time and reduces the volume of data transferred. In contrast to
existing approaches, it achieves this without creating a trust dependency on
any external entity.
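Reusing the toy inner-product construction from the HybridAlpha sketch above, the following illustrates DeTrust-FL's decentralized control idea: there is no central key authority; each party keeps its own mask and releases its decryption key fragment only for weight vectors that appear in the participation matrix all parties agreed on beforehand, which is what blocks a disaggregation request that would isolate a single party. All scheme details are simplified assumptions, not the paper's actual FE scheme.

    import numpy as np

    Q = 2**31 - 1

    class Party:
        """Each party keeps a private mask and one copy of the agreed
        participation matrix; there is no central key authority."""
        def __init__(self, idx, update, dim, rng, participation):
            self.idx = idx
            self.update = update
            self.s = rng.integers(0, Q, size=dim)  # private mask, never shared
            self.participation = participation

        def ciphertext(self):
            return (self.update + self.s) % Q

        def key_fragment(self, y):
            # Refuse any weight vector outside the agreed participation
            # matrix; this blocks disaggregation requests such as y = e_i.
            if not any(np.array_equal(y, row) for row in self.participation):
                raise ValueError("weight vector not in participation matrix")
            return (y[self.idx] * self.s) % Q

    rng = np.random.default_rng(2)
    P = np.array([[1, 1, 1]])                 # only full aggregation is allowed
    parties = [Party(i, rng.integers(0, 100, size=4), 4, rng, P)
               for i in range(3)]

    y = np.array([1, 1, 1])
    cts = [p.ciphertext() for p in parties]
    sk = sum(p.key_fragment(y) for p in parties) % Q  # fragments combine to sk_y
    print((y @ np.array(cts) - sk) % Q)               # = sum of updates only
    print(sum(p.update for p in parties) % Q)         # sanity check

    # A disaggregation attempt, y = (1, 0, 0), is rejected by every party.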